Virtual reality (VR) headsets provide an immersive, stereoscopic visual experience, but at the cost of blocking users from directly observing their physical environment. Passthrough techniques aim to address this limitation by leveraging outward-facing cameras to reconstruct the images that would otherwise be seen by the user without the headset. This is inherently a real-time view synthesis challenge, since passthrough cameras cannot be physically co-located with the user's eyes. Existing passthrough techniques suffer from distracting reconstruction artifacts, largely due to the lack of accurate depth information (especially for near-field and disoccluded objects), and also exhibit limited image quality (e.g., being low resolution and monochromatic). In this paper, we propose the first learned passthrough method and assess its performance using a custom VR headset that contains a stereo pair of RGB cameras. Through both simulations and experiments, we demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods, while meeting strict VR requirements for real-time, perspective-correct stereoscopic view synthesis over a wide field of view for desktop-connected headsets.
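The core geometric problem above can be illustrated with a toy depth-based reprojection: shifting each camera pixel toward the eye position by a disparity proportional to inverse depth. This is a minimal 1D sketch, not the paper's learned method; the baseline and focal-length values are made-up numbers.

```python
import numpy as np

# Toy view-synthesis step behind passthrough: reproject a camera image toward
# the eye position using per-pixel depth. A 1D horizontal shift
# (disparity = baseline * focal / depth) stands in for full 3D warping.
def warp_to_eye(image, depth, baseline=0.02, focal=100.0):
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            disparity = int(round(baseline * focal / depth[y, x]))
            xt = x + disparity
            if 0 <= xt < w:
                out[y, xt] = image[y, x]  # disocclusion holes remain unfilled
    return out

image = np.arange(16.0).reshape(4, 4)
depth = np.full((4, 4), 2.0)   # constant 2 m depth -> uniform 1-pixel shift
eye_view = warp_to_eye(image, depth)
```

The unfilled zeros in `eye_view` are exactly the disocclusion artifacts that the learned method is designed to inpaint plausibly.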
Solute transport in porous media is relevant to a wide range of applications in hydrogeology, geothermal energy, underground CO2 storage, and a variety of chemical engineering systems. Due to the complexity of solute transport in heterogeneous porous media, traditional solvers require high-resolution meshing and are therefore computationally expensive. This study explores the application of a mesh-free method based on deep learning to accelerate the simulation of solute transport. We employ Physics-informed Neural Networks (PiNNs) to solve solute transport problems in homogeneous and heterogeneous porous media governed by the advection-dispersion equation. Unlike traditional neural networks that learn from large training datasets, PiNNs leverage only the strong-form mathematical models to simultaneously solve for multiple dependent or independent field variables (e.g., pressure and solute concentration fields). In this study, we construct PiNNs using a periodic activation function to better represent the complex physical signals (i.e., pressure) and their derivatives (i.e., velocity). Several case studies are designed with the intention of investigating the proposed PiNN's capability to handle different degrees of complexity. A manual hyperparameter tuning method is used to find the best PiNN architecture for each test case. Point-wise error and mean squared error (MSE) measures are employed to assess the performance of the PiNNs' predictions against ground-truth solutions obtained analytically or numerically using the finite element method. Our findings show that the predictions of PiNNs are in good agreement with the ground-truth solutions while reducing computational complexity and cost by at least three orders of magnitude.
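The PiNN idea above can be sketched in a few lines: a small network with a periodic (sine) activation is penalized on the residual of the advection-dispersion equation at collocation points, plus an initial-condition term. This is a minimal 1D illustration with assumed constants, not the paper's 2D heterogeneous setups.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Minimal PiNN sketch for the 1D advection-dispersion equation
#   c_t + v * c_x = D * c_xx
# (v and D are illustrative constants).
class Sine(nn.Module):
    # periodic activation, as in SIREN-style networks
    def forward(self, x):
        return torch.sin(x)

net = nn.Sequential(
    nn.Linear(2, 32), Sine(),
    nn.Linear(32, 32), Sine(),
    nn.Linear(32, 1),
)

v, D = 1.0, 0.1  # assumed advection velocity and dispersion coefficient

def pde_residual(xt):
    xt = xt.clone().requires_grad_(True)
    c = net(xt)
    grads = torch.autograd.grad(c.sum(), xt, create_graph=True)[0]
    c_x, c_t = grads[:, 0:1], grads[:, 1:2]
    c_xx = torch.autograd.grad(c_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return c_t + v * c_x - D * c_xx

# collocation points in (x, t) in [0,1]^2 plus a toy initial condition
xt_col = torch.rand(256, 2)
x0 = torch.rand(64, 1)
xt0 = torch.cat([x0, torch.zeros_like(x0)], dim=1)
c0 = torch.sin(torch.pi * x0)          # c(x, 0) = sin(pi x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
losses = []
for _ in range(200):
    opt.zero_grad()
    loss = pde_residual(xt_col).pow(2).mean() + (net(xt0) - c0).pow(2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Because the PDE residual is enforced directly through automatic differentiation, no mesh is needed; collocation points can be placed anywhere in the domain.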
Vulnerability to adversarial attacks is a well-known weakness of Deep Neural Networks. While most studies focus on natural images with standardized benchmarks like ImageNet and CIFAR, little research has considered real-world applications, in particular in the medical domain. Our research shows that, contrary to previous claims, the robustness of chest x-ray classification is much harder to evaluate and leads to very different assessments depending on the dataset, the architecture, and the robustness metric. We argue that previous studies did not take into account the peculiarities of medical diagnosis, such as the co-occurrence of diseases, the disagreement of labellers (domain experts), the threat model of the attacks, and the risk implications of each successful attack. In this paper, we discuss the methodological foundations, review the pitfalls and best practices, and suggest new methodological considerations for evaluating the robustness of chest x-ray classification models. Our evaluation on 3 datasets, 7 models, and 18 diseases is the largest robustness evaluation of chest x-ray classification models to date.
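One of the standard attacks used in robustness evaluations of this kind is the fast gradient sign method (FGSM). The sketch below is purely illustrative: the tiny linear model and the random input tensor are stand-ins, not the paper's chest x-ray models or data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier and input; a real evaluation would use a trained
# chest x-ray model and actual images.
model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 2))
x = torch.rand(1, 1, 16, 16)
y = torch.tensor([1])

def fgsm(model, x, y, eps):
    # perturb the input along the sign of the loss gradient, bounded by eps
    x_adv = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

x_adv = fgsm(model, x, y, eps=0.03)
perturbation = (x_adv - x).abs().max().item()
```

The `eps` budget is the threat-model parameter the abstract alludes to: conclusions about robustness change substantially depending on how it is chosen relative to clinically plausible image noise.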
The development of HTR models has become a conventional step in digital humanities projects. The performance of these models, often quite high, relies on manual transcription and large numbers of handwritten documents. Although the method has proven successful for Latin scripts, a similar amount of data is not yet achievable for scripts considered poorly endowed, like Arabic scripts. In that respect, we introduce and assess a new modus operandi for developing and fine-tuning HTR models dedicated to the Arabic Maghribī scripts. A comparison between several state-of-the-art HTR systems demonstrates the relevance of a word-based neural approach specialized for Arabic, capable of achieving an error rate below 5% with only 10 pages manually transcribed. These results open new perspectives for Arabic script processing and, more generally, for the processing of poorly endowed languages. This research is part of the development of the RASAM dataset in partnership with the GIS MOMM and the BULAC.
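The "error rate below 5%" figure is conventionally a character error rate (CER): edit distance between predicted and reference transcription, normalized by reference length. A standard dynamic-programming implementation, not the paper's evaluation code, looks like this:

```python
# Character Error Rate: Levenshtein distance / reference length.
def cer(reference: str, hypothesis: str) -> float:
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))          # one-row Levenshtein DP
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                # deletion
                        dp[j - 1] + 1,            # insertion
                        prev + (reference[i - 1] != hypothesis[j - 1]))  # substitution
            prev = cur
    return dp[n] / max(m, 1)
```

For example, `cer("kitten", "sitting")` is 3/6 = 0.5. Note that CER works unchanged on Arabic strings, since it operates on Unicode code points, not on Latin letters.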
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and that have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer-vision-based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer-vision-based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions for mitigating this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion of the main open challenges and future research directions is given.
The combination of big data and deep learning is a disruptive technology that, if used correctly, can greatly impact any objective. With the availability of large healthcare datasets and advancements in deep learning techniques, systems are now well equipped to predict the future trends of any health problem. From our literature survey, we found that SVMs have been used to predict heart failure cases without relating the relevant objective factors. Leveraging the strength of the significant historical information in electronic health records (EHR), we built a smart, predictive model using Long Short-Term Memory (LSTM) networks and predicted future trends of heart failure based on these health records. The basic commitment of this work is therefore to predict heart failure using an LSTM based on patients' electronic medical information. We analyzed a dataset containing the medical records of 299 heart failure patients collected at the Faisalabad Institute of Cardiology and at the Allied Hospital in Faisalabad (Punjab, Pakistan). These patients consisted of 105 women and 194 men between 40 and 95 years of age. The dataset contains 13 features that report the clinical, physical, and lifestyle information associated with heart failure. We found promising trends in our analysis, which will help advance knowledge in the field of heart failure prediction.
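A model of the kind described above can be sketched as an LSTM over per-visit feature vectors, ending in a sigmoid probability of heart failure. The 13-feature input matches the dataset described; the sequence length, hidden size, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# LSTM over EHR feature sequences; the final hidden state summarizes
# the patient's record before a binary heart-failure prediction.
class HeartFailureLSTM(nn.Module):
    def __init__(self, n_features=13, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))  # probability of heart failure

model = HeartFailureLSTM()
records = torch.rand(4, 10, 13)           # 4 synthetic patients, 10 time steps
probs = model(records)
```

Training would minimize binary cross-entropy between `probs` and the recorded outcomes; here only the forward pass is shown.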
Dog owners are often able to recognize behavioral cues that reveal subjective states of their dogs, such as pain. But automated recognition of a pain state is very challenging. This paper proposes a novel video-based, two-stream deep neural network approach to this problem. We extract and preprocess body keypoints and compute features from both the keypoints and the RGB representation over the video. We propose an approach for handling self-occlusions and missing keypoints. We also present a unique video-based dog behavior dataset, collected by veterinary professionals and annotated for pain, and report good classification results with the proposed approach. This study is one of the first works on machine-learning-based estimation of dog pain states.
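The two-stream idea can be sketched as one encoder over keypoint sequences and one over RGB frame features, fused before classification. The layer sizes and the use of simple GRU encoders here are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two-stream fusion: keypoint stream + RGB stream, concatenated for a
# pain / no-pain decision.
class TwoStreamPain(nn.Module):
    def __init__(self, n_keypoints=17, rgb_dim=512, hidden=64):
        super().__init__()
        self.kp_enc = nn.GRU(n_keypoints * 2, hidden, batch_first=True)
        self.rgb_enc = nn.GRU(rgb_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)   # pain vs. no-pain logits

    def forward(self, kp_seq, rgb_seq):
        _, h_kp = self.kp_enc(kp_seq)
        _, h_rgb = self.rgb_enc(rgb_seq)
        fused = torch.cat([h_kp[-1], h_rgb[-1]], dim=-1)
        return self.head(fused)

model = TwoStreamPain()
kp = torch.rand(2, 8, 34)     # 2 clips, 8 frames, 17 (x, y) keypoints
rgb = torch.rand(2, 8, 512)   # per-frame RGB features from any backbone
logits = model(kp, rgb)
```

Missing or self-occluded keypoints would enter `kp` as masked or imputed values, which is where the paper's dedicated handling comes in.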
The ability to accelerate the design of biological sequences can have a substantial impact on progress in the medical field. The problem can be framed as a global optimization problem where the objective is an expensive black-box function, such that we can query large batches but only over a limited number of rounds. Bayesian optimization is a principled method for tackling this problem. However, the astronomically large state space of biological sequences renders brute-force iteration over all possible sequences infeasible. In this paper, we propose MetaRLBO, where we train an autoregressive generative model via meta-reinforcement learning to propose promising sequences for selection via Bayesian optimization. We pose this problem as that of finding an optimal policy over a distribution of MDPs induced by sampling subsets of the data acquired in previous rounds. Our in silico experiments show that meta-learning over such ensembles provides robustness against reward misspecification and achieves competitive results compared to existing strong baselines.
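The Bayesian-optimization inner loop referred to above can be sketched with a Gaussian-process surrogate and the expected-improvement (EI) acquisition. In MetaRLBO the candidates would be sequences proposed by the meta-learned generative policy; here they are plain grid points on a toy 1D objective, purely for illustration.

```python
import numpy as np
from math import erf, sqrt, pi

# GP surrogate with an RBF kernel over a toy 1D design space.
def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(Kss - Ks.T @ v), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI = (mu - best) * Phi(z) + sigma * phi(z), z = (mu - best) / sigma
    z = (mu - best) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sigma * pdf

x_train = np.array([0.1, 0.4, 0.9])
y_train = np.sin(3 * x_train)              # toy black-box objective
x_cand = np.linspace(0, 1, 101)
mu, sigma = gp_posterior(x_train, y_train, x_cand)
ei = expected_improvement(mu, sigma, y_train.max())
next_query = x_cand[np.argmax(ei)]
```

The round structure in the abstract corresponds to repeating this loop: fit the surrogate on all data so far, score a batch of candidates with the acquisition, query the black box, and refit.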
It is well known that the success of graph neural networks (GNNs) relies heavily on abundant human-annotated data, which is laborious to obtain and not always available in practice. When only few labeled nodes are available, how to develop highly effective GNNs remains understudied. Though self-training has been shown to be powerful for semi-supervised learning, its application to graph-structured data may fail because (1) larger receptive fields are not leveraged to capture long-range node interactions, which exacerbates the difficulty of propagating feature-label patterns from labeled nodes to unlabeled nodes; and (2) limited labeled data makes it challenging to learn well-separated decision boundaries for different node classes without explicitly capturing the underlying semantic structure. To address the challenges of capturing informative structural and semantic knowledge, we propose a new graph data augmentation framework, AGST (Augmented Graph Self-Training), which is built with two new (i.e., structural and semantic) augmentation modules on top of a GST backbone. In this work, we investigate whether this novel framework can learn an effective graph predictive model with extremely limited labeled nodes. We conduct comprehensive evaluations on semi-supervised node classification under different scenarios of limited labeled-node data. The experimental results demonstrate the unique contributions of the novel data augmentation framework to node classification with few labeled data.
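The self-training step that frameworks like AGST build on can be sketched as selecting high-confidence predictions on unlabeled nodes and promoting them to pseudo-labels. The threshold value is an illustrative assumption, and AGST's structural and semantic augmentations are omitted here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Keep only unlabeled nodes whose predicted class probability clears a
# confidence threshold; these become pseudo-labels for the next round.
def select_pseudo_labels(logits, threshold=0.9):
    probs = softmax(logits)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

logits = np.array([[4.0, 0.0],    # confident class 0 -> kept
                   [0.2, 0.3],    # too uncertain    -> dropped
                   [0.0, 5.0]])   # confident class 1 -> kept
idx, labels = select_pseudo_labels(logits)
```

With few labeled nodes, confidence estimates like these are unreliable far from the labeled set, which is precisely the failure mode the abstract's two observations describe and the augmentation modules aim to correct.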
This research is the second phase in a series of investigations into optical character recognition (OCR) of Arabic historical documents, and it studies how different modeling procedures interact with the problem. The first study investigated the effect of transformers on our custom-built Arabic dataset. One of the drawbacks of the first study was the size of the training data: only 15,000 images out of our 30 million, due to a lack of resources. In addition, we add an image-enhancement layer, time and space optimizations, and a post-correction layer to help the model predict the correct context. Notably, we propose an end-to-end text recognition approach using a vision transformer as the encoder, namely BEiT, and a vanilla transformer as the decoder, eliminating CNNs for feature extraction and reducing the model's complexity. The experiments show that our end-to-end model outperforms the convolutional backbones. The model achieves a CER of 4.46%.
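At the shape level, the encoder-decoder setup above looks like the following: image patches become a memory sequence for a transformer decoder that emits character tokens. Real BEiT weights, the patch-embedding details, and the Arabic vocabulary are replaced by small random stand-ins in this sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model, vocab = 64, 120                   # assumed sizes
patch_embed = nn.Linear(16 * 16, d_model)  # flattened 16x16 patches -> tokens
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
tok_embed = nn.Embedding(vocab, d_model)
head = nn.Linear(d_model, vocab)

patches = torch.rand(1, 25, 16 * 16)       # one line image cut into 25 patches
memory = encoder(patch_embed(patches))     # no CNN feature extractor needed
tokens = torch.randint(0, vocab, (1, 7))   # partial character sequence so far
logits = head(decoder(tok_embed(tokens), memory))
```

Replacing the CNN backbone with a linear patch embedding is exactly what removes the convolutional feature extractor the abstract mentions; positional encodings and the causal decoding mask are omitted for brevity.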